QuesGenie: Intelligent Multimodal Question Generation

Mubarak, Ahmed, Ahmed, Amna, Nasser, Amira, Mohamed, Aya, El-Sadek, Fares, Ahmed, Mohammed, Salah, Ahmed, Sobhy, Youssef

arXiv.org Artificial Intelligence

In today's information-rich era, learners have access to abundant educational resources, but the lack of practice materials tailored to these resources presents a significant challenge. This project addresses that gap by developing a multimodal question generation system that automatically produces diverse question types from a variety of content formats. It lays the foundation for automated, scalable, and intelligent question generation, carefully balancing resource efficiency, robust functionality, and a smooth user experience. Creating assessment questions is a time-consuming and labor-intensive task for educators. Traditional methods require manual extraction of information from materials, which can lead to inconsistencies and errors. Additionally, students often struggle to find varied practice questions that cover all aspects of the material they are studying. With the increasing use of multimedia in educational content, there is a growing need for systems that can process various data types, including text, diagrams, and audio recordings.


AraDiCE: Benchmarks for Dialectal and Cultural Capabilities in LLMs

Mousi, Basel, Durrani, Nadir, Ahmad, Fatema, Hasan, Md. Arid, Hasanain, Maram, Kabbani, Tameem, Dalvi, Fahim, Chowdhury, Shammur Absar, Alam, Firoj

arXiv.org Artificial Intelligence

Arabic, with its rich diversity of dialects, remains significantly underrepresented in Large Language Models, particularly in its dialectal variations. We address this gap by introducing seven synthetic datasets in Arabic dialects alongside Modern Standard Arabic (MSA), created using Machine Translation (MT) combined with human post-editing. We present AraDiCE, a benchmark for Arabic Dialect and Cultural Evaluation. We evaluate LLMs on dialect comprehension and generation, focusing specifically on low-resource Arabic dialects. Additionally, we introduce the first-ever fine-grained benchmark designed to evaluate cultural awareness across the Gulf, Egypt, and Levant regions, providing a novel dimension to LLM evaluation. Our findings demonstrate that while Arabic-specific models like Jais and AceGPT outperform multilingual models on dialectal tasks, significant challenges persist in dialect identification, generation, and translation. This work contributes ~45K post-edited samples and a fine-grained cultural benchmark, and highlights the importance of tailored training for improving LLM performance in capturing the nuances of diverse Arabic dialects and cultural contexts. We will release the dialectal translation models and benchmarks curated in this study.


Crafting Large Language Models for Enhanced Interpretability

Sun, Chung-En, Oikarinen, Tuomas, Weng, Tsui-Wei

arXiv.org Artificial Intelligence

We introduce the Concept Bottleneck Large Language Model (CB-LLM), a pioneering approach to creating inherently interpretable Large Language Models (LLMs). Unlike traditional black-box LLMs, which rely on post-hoc interpretation methods that offer limited insight into neuron function, CB-LLM sets a new standard with its built-in interpretability, scalability, and ability to provide clear, accurate explanations. This innovation not only advances transparency in language models but also enhances their effectiveness. Our Automatic Concept Correction (ACC) strategy successfully narrows the performance gap with conventional black-box LLMs, positioning CB-LLM as a model that combines the high accuracy of traditional LLMs with the added benefit of clear interpretability -- a feature markedly absent in existing LLMs.
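The core idea behind a concept bottleneck is to route every prediction through a layer of human-nameable concept activations, so each output can be explained by the concepts that produced it. The sketch below is a minimal, illustrative NumPy version of that routing only; the dimensions, concept names, and random weights are all hypothetical and do not reflect CB-LLM's actual architecture or training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: hidden dimension, number of named concepts, output classes.
d_hidden, n_concepts, n_classes = 8, 3, 2
concept_names = ["sentiment", "formality", "topicality"]  # illustrative labels only

W_concept = rng.normal(size=(d_hidden, n_concepts))  # hidden state -> concept scores
W_out = rng.normal(size=(n_concepts, n_classes))     # concept scores -> class logits

def predict(h):
    """Route a hidden state through the concept bottleneck.

    The final logits depend ONLY on the concept activations, which is
    what makes each prediction attributable to named concepts.
    """
    concepts = np.tanh(h @ W_concept)  # interpretable intermediate layer
    logits = concepts @ W_out
    return concepts, logits

h = rng.normal(size=d_hidden)          # a stand-in for an LLM hidden state
concepts, logits = predict(h)
for name, c in zip(concept_names, concepts):
    print(f"{name}: {c:+.2f}")         # per-concept contribution to the decision
```

Because the bottleneck is the only path to the output, inspecting `concepts` gives a faithful (not post-hoc) explanation of the prediction, which is the interpretability property the abstract describes.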